Anthropic loosens safety rules while AI race heats up
Anthropic, known for its commitment to artificial intelligence safeguards, has loosened its central safety policy, saying the move is necessary to keep pace in a rapidly changing field.
In its 2023 Responsible Scaling Policy, the company said it would delay AI development that might be dangerous. In a blog post on Feb 24, Anthropic said it was updating the policy so that it would no longer delay development if it believes it lacks a significant lead over a competitor.
“The policy environment has shifted towards prioritising AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level,” Anthropic said in its post.
Recently valued at US$380 billion (S$480 billion), Anthropic is racing against OpenAI, Alphabet’s Google and Mr Elon Musk’s xAI for dominance in what many view as a revolutionary new technology.
“From the beginning, we’ve said the pace of AI and uncertainties in the field would require us to rapidly iterate and improve the policy,” an Anthropic spokeswoman said.
The updated policy, which was earlier reported by Time, coincides with a growing dispute with the US Defense Department over Anthropic’s insistence on guardrails for use of its Claude AI tool.
The Pentagon on Feb 24 threatened to invoke a Cold War-era law to compel Anthropic to allow the US military to use the start-up’s technology.
Anthropic is also making a bigger push into the legal industry, recently announcing partnerships with LegalZoom, Harvey and Intapp that can connect their legal resources with Claude. BLOOMBERG


